
    Interaction between high-level and low-level image analysis for semantic video object extraction


    S-adenosylmethionine and superoxide dismutase 1 synergistically counteract Alzheimer's disease features progression in tgCRND8 mice

    Recent evidence emphasizes the role of dysregulated one-carbon metabolism in Alzheimer's disease (AD). Exploiting a nutritional B-vitamin deficiency paradigm, we have previously shown that PSEN1 and BACE1 activity is modulated by one-carbon metabolism, leading to increased amyloid production. We have also demonstrated that S-adenosylmethionine (SAM) supplementation counteracts the AD-like features induced by B-vitamin deficiency. In the present study, we expanded these observations by investigating the effects of combined SAM and superoxide dismutase (SOD) supplementation. TgCRND8 AD mice were fed either a control or a B-vitamin-deficient diet, with or without oral supplementation of SAM + SOD. We measured oxidative stress by lipid peroxidation assay, PSEN1 and BACE1 expression by real-time polymerase chain reaction (PCR), and amyloid deposition by ELISA assays and immunohistochemistry. We found that SAM + SOD supplementation prevents the exacerbation of AD-like features induced by B-vitamin deficiency, showing synergistic effects compared to either SAM or SOD alone. SAM + SOD supplementation also reduces the amyloid deposition typically observed in TgCRND8 mice. Although the mechanisms underlying the beneficial effect of exogenous SOD remain to be elucidated, our findings suggest that the SAM + SOD combination merits consideration as a co-adjuvant to current AD therapies.

    Cast-Gan: Learning to Remove Colour Cast from Underwater Images

    Underwater images are degraded by blur and colour cast caused by the attenuation of light in water. To remove the colour cast with neural networks, images of the scene taken under white illumination are needed as a reference for training, but such images are generally unavailable. As an alternative, one can use surrogate reference images taken close to the water surface, or degraded images synthesised from reference datasets. However, the former still suffer from colour cast and the latter generally have limited colour diversity. To address these problems, we exploit open data and typical colour distributions of objects to create a synthetic image dataset that reflects degradations naturally occurring in underwater photography. We use this dataset to train Cast-GAN, a Generative Adversarial Network whose loss function includes terms that eliminate artefacts typical of underwater images enhanced with neural networks. We compare the enhancement results of Cast-GAN with four state-of-the-art methods and validate the cast removal with a subjective evaluation.
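    The degradation the abstract describes is commonly modelled with wavelength-dependent attenuation plus backscattered veiling light. As a hedged illustration of how such synthetic training images could be generated (the coefficients, function name, and veiling colour below are illustrative assumptions, not values from the paper):

    ```python
    import numpy as np

    def synthesise_underwater(image, depth, beta=(0.6, 0.2, 0.05), veil=(0.1, 0.5, 0.6)):
        """Apply a simplified attenuation + backscatter model to an RGB image.

        image: float array in [0, 1], shape (H, W, 3), channels R, G, B.
        depth: scene distance in metres (scalar or an (H, W) depth map).
        beta:  per-channel attenuation coefficients; red attenuates fastest
               in water, hence the largest value for the red channel.
        veil:  blue-green veiling-light colour contributed by backscatter.
        All numbers here are illustrative, not taken from the Cast-GAN paper.
        """
        beta = np.asarray(beta, dtype=float)
        veil = np.asarray(veil, dtype=float)
        t = np.exp(-beta * np.atleast_3d(depth))   # per-channel transmission
        return image * t + veil * (1.0 - t)        # direct signal + backscatter
    ```

    Applying this to a neutral grey patch at a few metres' depth yields the characteristic blue-green cast: the red channel is suppressed far more than the blue one.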

    Acoustic Sensing From a Multi-Rotor Drone


    Deep learning assisted time-frequency processing for speech enhancement on drones

    This article bridges the growing interest in signal processing based on Deep Neural Networks (DNNs) and the new application of enhancing speech captured by microphones on a drone. In this context, the quality of the target sound is degraded significantly by the strong ego-noise from the rotating motors and propellers. We present the first work that integrates single-channel and multi-channel DNN-based approaches for speech enhancement on drones. We employ a DNN to estimate the ideal ratio masks at individual time-frequency bins, which are subsequently used to design three speech enhancement systems: single-channel ego-noise reduction (DNN-S), multi-channel beamforming (DNN-BF), and multi-channel time-frequency spatial filtering (DNN-TF). The main novelty lies in the proposed DNN-TF algorithm, which infers the noise-dominance probabilities at individual time-frequency bins from the DNN-estimated soft masks, and then incorporates them into a time-frequency spatial filtering framework for ego-noise reduction. By jointly exploiting the direction of arrival of the target sound, the time-frequency sparsity of the acoustic signals (speech and ego-noise) and the time-frequency noise-dominance probability, DNN-TF can suppress the ego-noise effectively in scenarios with very low signal-to-noise ratios (e.g. SNR lower than -15 dB), especially when the direction of the target sound is close to that of a source of the ego-noise. Experiments with real and simulated data show the advantage of DNN-TF over competing methods, including DNN-S, DNN-BF and a state-of-the-art time-frequency spatial filter.
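    The mask-based pipeline the abstract outlines can be sketched minimally: an ideal ratio mask (IRM) is the oracle training target, a DNN predicts it at inference, and thresholding the soft mask gives the per-bin noise-dominance decision that DNN-TF feeds into its spatial filter. The function names and the 0.5 threshold below are illustrative assumptions; the paper's actual DNN architecture and filtering framework are not reproduced here:

    ```python
    import numpy as np

    def ideal_ratio_mask(speech_mag, noise_mag, eps=1e-12):
        """Oracle IRM at each time-frequency bin (the DNN's training target):
        the fraction of mixture energy attributable to speech."""
        return speech_mag**2 / (speech_mag**2 + noise_mag**2 + eps)

    def enhance_and_flag(mixture_stft, mask, threshold=0.5):
        """Single-channel enhancement (DNN-S style): scale each bin by the
        estimated soft mask. Bins whose mask falls below `threshold` are
        flagged as noise-dominated -- the cue a DNN-TF-style spatial filter
        would exploit. The threshold value is an illustrative assumption."""
        noise_dominant = mask < threshold          # per-bin dominance decision
        return mixture_stft * mask, noise_dominant
    ```

    With equal speech and noise magnitudes the IRM is exactly 0.5; bins where ego-noise overwhelms speech get masks near 0 and are flagged as noise-dominated.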

    Networked Computer Vision: The Importance of a Holistic Simulator


    Microphone-Array Ego-Noise Reduction Algorithms for Auditory Micro Aerial Vehicles


    Deep learning assisted sound source localization from a flying drone
